VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles


Simulations are an essential tool for advancing new algorithms in robot perception, learning, and evaluation. In the case of autonomous vehicles, experience in simulation is often significantly faster and safer than operation in the physical world. However, scaling simulation engines to support multiple sensor types remains an open problem.

A recent study on arXiv.org presents a multi-sensor, data-driven engine for autonomous vehicle simulation, perception, and learning.

The researchers develop novel view synthesis capabilities for 2D RGB cameras, 3D LiDAR, and event-based sensors, and translate real-world data into a simulated perception-control API. End-to-end control policies are trained with each sensor type and deployed directly on a full-scale vehicle.
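
To make the idea of a data-driven perception-control API concrete, here is a minimal sketch of such a loop. All class and function names are hypothetical illustrations, not VISTA's actual interface: the simulator replays a logged drive, re-synthesizes sensor observations for the ego vehicle's current pose, and a learned policy closes the loop with steering and speed commands.

```python
# Hypothetical sketch of a data-driven perception-control loop (not VISTA's API).
import numpy as np


class SyntheticSensor:
    """Stand-in for a view-synthesis sensor (RGB camera, LiDAR, or event camera)."""

    def __init__(self, name: str, shape: tuple):
        self.name = name
        self.shape = shape

    def capture(self, pose: np.ndarray) -> np.ndarray:
        # A real data-driven simulator would re-render logged sensor data from
        # this novel viewpoint; here we return a placeholder observation.
        return np.zeros(self.shape, dtype=np.float32)


class DataDrivenSimulator:
    """Replays a recorded drive and exposes a simple perception-control API."""

    def __init__(self, sensors):
        self.sensors = sensors
        self.pose = np.zeros(3)  # (x, y, heading) relative to the logged trajectory

    def reset(self):
        self.pose = np.zeros(3)
        return self.observe()

    def observe(self):
        return {s.name: s.capture(self.pose) for s in self.sensors}

    def step(self, curvature: float, speed: float, dt: float = 0.1):
        # Integrate a simple kinematic model to advance the ego pose.
        self.pose[2] += curvature * speed * dt
        self.pose[0] += speed * dt * np.cos(self.pose[2])
        self.pose[1] += speed * dt * np.sin(self.pose[2])
        return self.observe()


def policy(observations):
    # Placeholder for an end-to-end perception-to-control network.
    return 0.0, 5.0  # (curvature, speed)


sim = DataDrivenSimulator([
    SyntheticSensor("rgb_camera", (200, 320, 3)),
    SyntheticSensor("lidar", (64, 1024)),
    SyntheticSensor("event_camera", (200, 320, 2)),
])
obs = sim.reset()
for _ in range(100):
    curvature, speed = policy(obs)
    obs = sim.step(curvature, speed)
```

The key point is that the same observe-act-step interface can be driven by any of the three simulated sensor modalities, which is what allows one policy-training pipeline to serve cameras, LiDAR, and event sensors alike.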

Learned policies exhibit direct sim-to-real transfer and greater robustness than those trained solely on real-world data.

In real-world experiments, the learned policies are deployed directly onboard a full-scale autonomous vehicle, where they demonstrate strong performance and generalization on complex tasks such as autonomous overtaking and avoidance of a partially observed dynamic agent.


A driverless car. Image credit: Steve Jurvetson via Flickr, CC BY 2.0



Simulation has the potential to transform the development of robust algorithms for mobile agents deployed in safety-critical scenarios. However, the poor photorealism and lack of diverse sensor modalities of existing simulation engines remain key hurdles towards realizing this potential. Here, we present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles. Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras, enabling the rapid generation of novel viewpoints in simulation and thereby enriching the data available for policy learning with corner cases that are difficult to capture in the physical world. Using VISTA, we demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle. The policies learned in VISTA exhibit sim-to-real transfer without modification and greater robustness than those trained exclusively on real-world data.
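
The abstract's point about enriching policy-learning data with hard-to-capture corner cases can be illustrated with a short sketch. The idea, under my own assumptions rather than the paper's exact formulation, is to perturb the ego pose laterally and in heading around the logged trajectory, re-synthesize the observation from that novel viewpoint, and label it with a corrective steering command that recovers toward the original path. The `render` callable and the pure-pursuit-style recovery rule below are illustrative placeholders.

```python
# Hedged sketch: generating recovery-style training pairs from synthesized viewpoints.
import numpy as np


def corrective_curvature(lateral_offset: float, heading_error: float,
                         lookahead: float = 10.0) -> float:
    # Pure-pursuit-style correction steering back toward the logged lane center
    # over a fixed lookahead distance (an assumed labeling rule, for illustration).
    return -2.0 * (lateral_offset + lookahead * np.sin(heading_error)) / lookahead ** 2


def synthesize_recovery_data(render, rng, n_samples: int = 32):
    """Return (observation, steering_label) pairs from perturbed viewpoints.

    `render(lateral, heading)` is assumed to return the sensor observation
    re-synthesized from a pose offset (lateral meters, heading radians)
    relative to the logged trajectory.
    """
    data = []
    for _ in range(n_samples):
        lateral = rng.uniform(-1.5, 1.5)    # meters off the logged path
        heading = rng.uniform(-0.15, 0.15)  # radians off the logged heading
        data.append((render(lateral, heading),
                     corrective_curvature(lateral, heading)))
    return data


# Usage with a placeholder renderer standing in for view synthesis:
rng = np.random.default_rng(0)
dataset = synthesize_recovery_data(lambda lat, hdg: np.zeros((200, 320, 3)), rng)
```

Training on such off-trajectory samples is one way a data-driven simulator can expose a policy to near-failure states that are unsafe or impractical to collect with a real vehicle.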